klotz: key-value cache + multiquery attention + autoregressive language models + cross-layer attention + machine learning + csail + attention + mit + transformer


  1. This paper introduces Cross-Layer Attention (CLA), an extension of Multi-Query Attention (MQA) and Grouped-Query Attention (GQA) that reduces the size of the key-value cache in transformer-based autoregressive large language models (LLMs) by sharing key and value heads across adjacent layers. The authors demonstrate that CLA can shrink the cache by an additional 2x over MQA alone while maintaining nearly the same accuracy as unmodified MQA, enabling inference with longer sequence lengths and larger batch sizes.
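A minimal sketch of the cross-layer KV-sharing idea, assuming a toy setup in which every second layer reuses the key/value tensors produced (and cached) by the layer below it; the class and parameter names are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedKVAttention(nn.Module):
    """Single-head attention whose K/V may be borrowed from another layer's cache."""

    def __init__(self, d_model: int, owns_kv: bool):
        super().__init__()
        self.owns_kv = owns_kv  # True: this layer projects and caches its own K/V
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        if owns_kv:
            self.k_proj = nn.Linear(d_model, d_model, bias=False)
            self.v_proj = nn.Linear(d_model, d_model, bias=False)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x, shared_kv=None):
        q = self.q_proj(x)
        if self.owns_kv:
            k, v = self.k_proj(x), self.v_proj(x)  # these go into the KV cache
        else:
            k, v = shared_kv                        # reuse the layer below's K/V
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(attn), (k, v)


# Two layers, but only one K/V pair needs caching: roughly the 2x
# KV-cache reduction the paper reports when layers are paired this way.
d_model = 64
layers = [SharedKVAttention(d_model, owns_kv=True),
          SharedKVAttention(d_model, owns_kv=False)]
x = torch.randn(1, 16, d_model)
kv = None
for layer in layers:
    x, kv = layer(x, shared_kv=kv)
print(x.shape)  # torch.Size([1, 16, 64])
```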


